Section: Partnerships and Cooperations

European Initiatives

QUAERO (Inria)

Participant: Ivan Laptev.

QUAERO (AII) is a European collaborative research and development program whose goal is to develop multimedia and multilingual indexing and management tools for professional and public applications. The Quaero consortium involves 24 academic and industrial partners, led by Technicolor (previously Thomson). Willow participates in work package 9, “Video Processing”, and leads the work on the motion recognition and event recognition tasks.
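
The report does not detail the recognition pipeline used in this work package. Purely as an illustration of the kind of local motion descriptor that motion and event recognition commonly build on, the following Python sketch computes a histogram-of-optical-flow (HOF) descriptor with OpenCV's Farneback flow; the two frames are synthetic and the parameter values are illustrative assumptions, not the project's settings.

import numpy as np
import cv2

def hof_descriptor(prev_gray, next_gray, n_bins=8):
    # Dense optical flow between two grayscale frames (Farneback's method).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Magnitude-weighted histogram of flow orientations: a simple HOF descriptor.
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 2 * np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)  # L1-normalize

# Two synthetic frames: a bright square translated a few pixels to the right.
f0 = np.zeros((128, 128), np.uint8); f0[40:80, 40:80] = 255
f1 = np.zeros((128, 128), np.uint8); f1[40:80, 46:86] = 255
print(hof_descriptor(f0, f1))  # mass should concentrate in the rightward bins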

EIT-ICT: Cross-linking Visual Information and Internet Resources using Mobile Networks (Inria)

Participants: Ivan Laptev, Josef Sivic.

The goal of this project, within the European EIT-ICT activity, is to perform basic research on semantic image and video understanding, as well as on efficient and reliable indexing of visual databases, with a specific focus on linking visual information captured by mobile users to Internet resources. The aim is to demonstrate future applications and to push innovation in the field of mobile visual search.

This is a collaborative effort with C. Schmid (Inria Grenoble) and S. Carlsson (KTH Stockholm).
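
The report does not specify the indexing machinery. As a hedged illustration of the content-based retrieval that mobile visual search builds on, the sketch below matches local ORB descriptors (a convenient stand-in, not the project's features) from a query image against a small in-memory image database and ranks the database images by match count; the images are synthetic.

import numpy as np
import cv2

orb = cv2.ORB_create(nfeatures=500)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def describe(img):
    # Local binary descriptors of the image's interest points (may be None).
    _, desc = orb.detectAndCompute(img, None)
    return desc

def retrieve(query, database):
    # Rank database images by number of cross-checked descriptor matches.
    q = describe(query)
    scores = []
    for name, img in database.items():
        d = describe(img)
        n = 0 if q is None or d is None else len(bf.match(q, d))
        scores.append((n, name))
    return sorted(scores, reverse=True)

# Synthetic example: random-texture images; the query is one of them.
rng = np.random.default_rng(0)
imgs = {f"img{i}": rng.integers(0, 255, (240, 320), np.uint8) for i in range(3)}
print(retrieve(imgs["img1"], imgs))  # img1 should rank first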

European Research Council (ERC) Advanced Grant

Participants: Jean Ponce, Ivan Laptev, Josef Sivic.

Willow will be funded in part from 2011 to 2015 by the ERC Advanced Grant "VideoWorld" awarded to Jean Ponce.

This project is concerned with the automated computer analysis of video streams. Digital video is everywhere: at home, at work, and on the Internet. Yet effective technology for organizing, retrieving, improving, and editing its content is nowhere to be found. Models for video content, interpretation, and manipulation inherited from still imagery are obsolete, and new ones must be invented. With a new convergence between computer vision, machine learning, and signal processing, the time is right for such an endeavor.

Concretely, we will develop novel spatio-temporal models of video content, learned from training data, that capture both the local appearance and the nonrigid motion of the elements (persons and their surroundings) that make up a dynamic scene. We will also develop formal models of the video interpretation process that leave behind the architectures inherited from the world of still images, capture the complex interactions between these elements, and yet can be learned effectively despite the sparse annotations typical of video understanding scenarios. Finally, we will propose a unified model for video restoration and editing that builds on recent advances in sparse coding and dictionary learning, and that will allow for unprecedented control of the video stream.

This project addresses fundamental research issues, but its results are expected to serve as a basis for groundbreaking technological advances in applications as varied as film post-production, video archival, and smart camera phones.
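
The abstract names sparse coding and dictionary learning as the basis for restoration and editing. The following minimal sketch, assuming scikit-learn's MiniBatchDictionaryLearning as a stand-in rather than the project's actual models, shows the standard patch-based recipe this machinery supports: learn a dictionary of patch atoms from a clean image, then reconstruct a noisy copy as sparse combinations of those atoms.

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

rng = np.random.default_rng(0)
x = np.linspace(0, 4 * np.pi, 64)
clean = np.sin(x)[None, :] * np.cos(x)[:, None]      # smooth 64x64 test image
noisy = clean + 0.2 * rng.standard_normal(clean.shape)

patch_size = (8, 8)
# Learn a dictionary of 8x8 atoms from clean, DC-removed patches.
patches = extract_patches_2d(clean, patch_size).reshape(-1, 64)
patches = patches - patches.mean(axis=1, keepdims=True)
dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0,
                                   max_iter=10, random_state=0).fit(patches)

# Sparse-code the noisy patches (OMP by default) and rebuild the image
# by averaging the overlapping reconstructed patches.
noisy_patches = extract_patches_2d(noisy, patch_size)
flat = noisy_patches.reshape(len(noisy_patches), -1)
means = flat.mean(axis=1, keepdims=True)
codes = dico.transform(flat - means)
recon = (codes @ dico.components_ + means).reshape(noisy_patches.shape)
denoised = reconstruct_from_patches_2d(recon, clean.shape)

print("RMSE noisy:   ", np.sqrt(((noisy - clean) ** 2).mean()))
print("RMSE denoised:", np.sqrt(((denoised - clean) ** 2).mean()))

The sparsity penalty (alpha) and the number of atoms are illustrative; in practice they are tuned to the noise level and content, and video restoration would additionally exploit temporal redundancy across frames.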